Since the GDPR went into effect in May 2018, companies have been working on their data practices to comply with this privacy law. In particular, because the privacy policy is the essential communication channel for users to understand and control their privacy, many companies updated their privacy policies after the GDPR came into force. However, most privacy policies are verbose, full of jargon, and vaguely describe companies' data practices and users' rights. It is therefore unclear whether they comply with the GDPR. In this paper, we create a privacy-policy dataset of 1,080 websites labeled with 18 GDPR requirements and develop a convolutional neural network (CNN) based model that classifies the privacy policies with 89.2% accuracy. We apply our model to measure the compliance of these privacy policies. Our results show that, even after the GDPR went into effect, 97% of websites still fail to comply with at least one GDPR requirement.
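As a rough illustration of the kind of classifier described, the sketch below shows a small Kim-style 1D-CNN over policy text with one output per GDPR requirement. It is written in PyTorch; the vocabulary size, filter widths, and other hyperparameters are placeholders of ours, not the paper's configuration.

```python
# Minimal sketch (not the paper's code): a 1D-CNN text classifier that
# scores a privacy-policy segment against 18 GDPR requirements.
import torch
import torch.nn as nn

class PolicyCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_labels=18):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Convolutions over 3-, 4-, and 5-gram windows, Kim-style text CNN.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)]
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(3 * 100, num_labels)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(features)                     # one logit per GDPR requirement

# Training would treat the 18 requirements as multi-label,
# e.g. with nn.BCEWithLogitsLoss over the requirement logits.
```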
COVID-19 has affected the world at large, yet misinformation about the outbreak has spread faster than the virus itself. Misinformation spread through online social networks (OSNs) often misleads people away from correct medical practices. In particular, OSN bots have been a primary source of disseminating false information and initiating cyber propaganda. Existing work neglects the presence of bots, which act as a catalyst in the spread, and focuses on detecting fake news in articles shared in posts rather than in the post (text) content itself. Most work on misinformation detection uses manually labeled datasets, which are hard to scale for building predictive models. In this study, we overcome this data-scarcity challenge by labeling a Twitter dataset using verified fact-checked statements. In addition, we combine textual features with user-level features (such as followers count and friends count) and tweet-level features (such as the mentions, hashtags, and URLs in a tweet) to act as additional indicators for detecting misinformation. Moreover, we analyze the presence of bots in the tweets and show that bots change their behavior over time and are the most active in spreading misinformation. We collected 10.22 million COVID-19 related tweets and used our annotation model to build an extensive original ground-truth dataset for classification. We utilize various machine learning models to accurately detect misinformation, and our best classification model achieves 82% precision, 96% recall, and a 3.58% false-positive rate. In addition, our bot analysis indicates that bots account for about 10% of the misinformation tweets. Our methodology substantially exposes false information, thereby improving the trustworthiness of information disseminated through social media platforms.
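A minimal sketch of how textual and user/tweet-level features could be combined in one classifier is shown below, using scikit-learn. The column names, feature set, and choice of a random forest are illustrative assumptions of ours, not the study's actual pipeline or schema.

```python
# Illustrative sketch (not the study's pipeline): combine tweet text with
# user- and tweet-level numeric features for misinformation classification.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical column names; the real dataset schema is not shown in the abstract.
text_col = "tweet_text"
numeric_cols = ["followers_count", "friends_count",
                "num_mentions", "num_hashtags", "num_urls"]

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=20000, ngram_range=(1, 2)), text_col),
    ("meta", StandardScaler(), numeric_cols),
])

model = Pipeline([("features", features),
                  ("clf", RandomForestClassifier(n_estimators=300))])

# df = pd.read_csv("labeled_tweets.csv")   # labels derived from fact-checked statements
# model.fit(df[[text_col] + numeric_cols], df["is_misinformation"])
```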
Many methods have recently been proposed for the multi-objective optimization of computationally expensive problems. Typically, a probabilistic surrogate for each objective is constructed from an initial dataset. The surrogates can then be used to produce a predictive density in objective space for any candidate solution. Using the predictive density, we can compute the expected hypervolume improvement (EHVI) due to a solution, and by maximizing the EHVI we can locate the most promising solution to expensively evaluate next. Closed-form expressions exist for computing the EHVI by integrating over the multivariate predictive density. However, they require a partitioning of the objective space, which can be prohibitively expensive for more than three objectives. Furthermore, there are no closed-form expressions for problems with dependent predictive densities, which capture the correlations between objectives. In such cases Monte Carlo approximation is used instead, which is not cheap. Therefore, there remains a need for new approximation methods that are accurate yet cheap. Here, we investigate an alternative approach to approximating the EHVI using Gauss-Hermite quadrature. We show that, for both independent and correlated predictive densities, it can be an accurate alternative to Monte Carlo on a range of popular test problems.
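The sketch below shows the basic Gauss-Hermite construction for a correlated multivariate predictive density: an expectation E[g(Y)] with Y ~ N(mu, Sigma) is approximated on a tensor-product grid of Hermite nodes mapped through the Cholesky factor of Sigma. With g taken to be the hypervolume improvement over the current front, this expectation is the EHVI at a candidate. The node count and the code itself are illustrative, not the paper's implementation.

```python
# Sketch (assumptions of ours, not the paper's code): Gauss-Hermite quadrature
# estimate of E[g(Y)] for Y ~ N(mu, Sigma) with correlated components.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(g, mu, Sigma, n_nodes=16):
    d = len(mu)
    x, w = hermgauss(n_nodes)                         # 1-D nodes/weights for weight e^{-x^2}
    # Tensor-product grid over d dimensions.
    nodes = np.array(np.meshgrid(*([x] * d))).reshape(d, -1).T
    weights = np.prod(np.array(np.meshgrid(*([w] * d))).reshape(d, -1).T, axis=1)
    L = np.linalg.cholesky(Sigma)
    y = mu + np.sqrt(2.0) * nodes @ L.T               # map standard nodes to N(mu, Sigma)
    vals = np.array([g(yi) for yi in y])
    return (weights * vals).sum() / np.pi ** (d / 2)

# With g(y) = hypervolume improvement of objective vector y over the current
# non-dominated front, the returned value approximates the EHVI at a candidate.
```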
Many expensive black-box optimization problems are sensitive to their inputs. For such problems it makes more sense to locate a region of good designs rather than a single, possibly fragile, optimal design. Expensive black-box functions can be efficiently optimized with Bayesian optimization, in which a Gaussian process is a popular choice of prior over the expensive function. We propose a robust optimization method that exploits Bayesian optimization to find a region of the design space in which the performance of the expensive function is relatively insensitive to the inputs while remaining of good quality. This is achieved by drawing realizations of the Gaussian process that models the expensive function and evaluating the improvement for each realization. The expectation of these improvements can be cheaply optimized with an evolutionary algorithm to determine the next location at which to evaluate the expensive function. We describe an efficient procedure for locating the optimum expected improvement. We empirically show that evaluating the expensive function at the location in the candidate uncertain region about which the model is most uncertain, or at random, yields the best convergence compared with exploitative schemes. We illustrate our method on six test functions in two, five, and ten dimensions, and demonstrate that it is able to outperform two state-of-the-art methods from the literature. We also demonstrate our method on two real-world problems in 4 and 8 dimensions, which involve training a robot arm to push an object onto a target.
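The following is a schematic sketch of the acquisition step as we read it: candidates are scored by their average improvement over draws from the Gaussian-process posterior, and that score is optimized with an evolutionary algorithm (SciPy's differential evolution here). Sampling the posterior pointwise at each candidate, as done below for brevity, is a simplification of ours; the method described works with full realizations of the process and with robustness over an input region.

```python
# Schematic sketch (assumptions, not the authors' exact procedure): score a
# candidate by its mean improvement over GP posterior draws, and optimise that
# acquisition with an evolutionary algorithm.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor

def next_location(gp: GaussianProcessRegressor, best_f, bounds, n_realizations=50):
    rng = np.random.default_rng(0)

    def neg_mean_improvement(x):
        # Posterior draws of the GP at the candidate point only (a simplification).
        samples = gp.sample_y(x.reshape(1, -1), n_samples=n_realizations,
                              random_state=int(rng.integers(1 << 31)))
        improvement = np.maximum(best_f - samples, 0.0)   # minimisation convention
        return -improvement.mean()

    result = differential_evolution(neg_mean_improvement, bounds, seed=0)
    return result.x   # next location at which to evaluate the expensive function
```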
Recent work has shown the benefits of synthetic data for use in computer vision, with applications ranging from autonomous driving to face landmark detection and reconstruction. There are a number of benefits of using synthetic data from privacy preservation and bias elimination to quality and feasibility of annotation. Generating human-centered synthetic data is a particular challenge in terms of realism and domain-gap, though recent work has shown that effective machine learning models can be trained using synthetic face data alone. We show that this can be extended to include the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety, with ground-truth annotations for computer vision applications. In this report we describe how we construct a parametric model of the face and body, including articulated hands; our rendering pipeline to generate realistic images of humans based on this body model; an approach for training DNNs to regress a dense set of landmarks covering the entire body; and a method for fitting our body model to dense landmarks predicted from multiple views.
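To make the final fitting step concrete, here is an illustrative sketch of fitting pose and shape parameters of a parametric body model to dense 2D landmarks predicted in several calibrated views, by minimizing reprojection error with gradient descent. The body-model API, parameter sizes, and optimizer settings are hypothetical; the report's actual model and fitter are not reproduced here.

```python
# Illustrative sketch only (hypothetical body_model API, not the report's code):
# fit pose/shape parameters to dense 2-D landmarks from multiple calibrated views.
import torch

def fit_body_model(body_model, cameras, landmarks_2d, n_iters=500):
    """
    body_model(pose, shape) -> (N, 3) model landmark positions   (hypothetical API)
    cameras: list of (3, 4) projection matrices, one per view
    landmarks_2d: list of (N, 2) predicted landmarks, one per view
    """
    pose = torch.zeros(72, requires_grad=True)      # illustrative parameter sizes
    shape = torch.zeros(10, requires_grad=True)
    opt = torch.optim.Adam([pose, shape], lr=0.05)

    for _ in range(n_iters):
        opt.zero_grad()
        pts3d = body_model(pose, shape)                                   # (N, 3)
        pts3d_h = torch.cat([pts3d, torch.ones(len(pts3d), 1)], dim=1)    # homogeneous
        loss = 0.0
        for P, lm in zip(cameras, landmarks_2d):
            proj = pts3d_h @ P.T                                          # (N, 3)
            proj = proj[:, :2] / proj[:, 2:3]                             # perspective divide
            loss = loss + ((proj - lm) ** 2).sum()                        # reprojection error
        loss.backward()
        opt.step()
    return pose.detach(), shape.detach()
```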
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that the rendered pixels at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns the supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features and significantly improve their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods, without increasing the total number of ray-tracing samples.
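For reference, the sketch below is a minimal multi-scale U-Net with skip connections, of the general kind used for the reconstruction stage. Channel counts, depth, and input layout are placeholders of ours, not the paper's network; the temporal accumulation stage and mask reinforcement are omitted.

```python
# Minimal sketch (not the paper's network): a small multi-scale U-Net with
# skip connections, as used in reconstruction-style image networks.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, out_ch, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)
```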
In this paper we explore the task of modeling (semi-)structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
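Our reading of the two components can be sketched as follows: a Transformer encoder runs over each key's value sequence (TVM), the resulting per-key vectors are tagged with key embeddings, and a second encoder self-attends across keys (KA). This is an illustrative simplification, not the authors' code; the interleaved training schedule and shared attention heads are not shown.

```python
# Rough sketch of TVM followed by KA (a simplification of ours, not the authors' code).
import torch
import torch.nn as nn

class TVMKA(nn.Module):
    def __init__(self, vocab_size, num_keys, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.value_embed = nn.Embedding(vocab_size, d_model)
        self.key_embed = nn.Embedding(num_keys, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.tvm = nn.TransformerEncoder(layer, n_layers)   # attends over time, per key
        self.ka = nn.TransformerEncoder(layer, n_layers)    # attends over keys

    def forward(self, values):          # values: (batch, num_keys, seq_len) token ids
        b, k, t = values.shape
        x = self.value_embed(values).view(b * k, t, -1)
        x = self.tvm(x).mean(dim=1).view(b, k, -1)           # one vector per key
        x = x + self.key_embed(torch.arange(k, device=values.device))
        return self.ka(x)                                    # (batch, num_keys, d_model)
```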
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
Deep neural networks (DNNs) are vulnerable to a class of attacks called "backdoor attacks", which create an association between a backdoor trigger and a target label the attacker is interested in exploiting. A backdoored DNN performs well on clean test images, yet persistently predicts an attacker-defined label for any sample in the presence of the backdoor trigger. Although backdoor attacks have been extensively studied in the image domain, there are very few works that explore such attacks in the video domain, and they tend to conclude that image backdoor attacks are less effective in the video domain. In this work, we revisit the traditional backdoor threat model and incorporate additional video-related aspects to that model. We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically, leading to highly effective attacks in the video domain. In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain. And, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models, where we show that attacking a single modality is enough for achieving a high attack success rate.
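As a toy illustration of the static temporal extension of a poisoned-label image backdoor, the sketch below stamps the same trigger patch on every frame of a sampled fraction of training clips and relabels them with the attacker's target class. Patch size, poisoning rate, and trigger content are arbitrary choices of ours, not the paper's settings.

```python
# Toy sketch (for illustration, not the paper's attack): static poisoned-label
# video backdoor -- the same trigger patch is applied to every frame.
import numpy as np

def poison_clip(clip, trigger, corner=(0, 0)):
    """clip: (T, H, W, C) uint8 video; trigger: (h, w, C) patch."""
    t_h, t_w = trigger.shape[:2]
    y, x = corner
    poisoned = clip.copy()
    poisoned[:, y:y + t_h, x:x + t_w, :] = trigger       # same patch on every frame
    return poisoned

def poison_dataset(clips, labels, target_label, rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    trigger = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
    idx = rng.choice(len(clips), size=int(rate * len(clips)), replace=False)
    for i in idx:
        clips[i] = poison_clip(clips[i], trigger)
        labels[i] = target_label                          # poisoned-label attack
    return clips, labels, trigger
```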
Modern deep neural networks have achieved superhuman performance on tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove the occurrence of Neural Collapse for deep linear networks under the popular mean squared error (MSE) and cross entropy (CE) losses. Furthermore, we extend our research to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
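For reference, the Neural Collapse properties of Papyan et al. are commonly summarized, for a balanced K-class problem with last-layer features h_{k,i}, class means mu_k, global mean mu_G, and classifier rows w_k, as follows (standard statements, reproduced here for context rather than taken from this paper):

```latex
\begin{align*}
\text{(NC1)}\quad & \Sigma_W := \operatorname{Ave}_{k,i}\,(h_{k,i}-\mu_k)(h_{k,i}-\mu_k)^\top \to 0
  && \text{within-class variability collapses}\\
\text{(NC2)}\quad & \frac{\langle \mu_k-\mu_G,\ \mu_{k'}-\mu_G\rangle}
       {\|\mu_k-\mu_G\|\,\|\mu_{k'}-\mu_G\|} \to \frac{K\,\delta_{kk'}-1}{K-1}
  && \text{class means form a simplex ETF}\\
\text{(NC3)}\quad & \frac{w_k}{\|w_k\|} \to \frac{\mu_k-\mu_G}{\|\mu_k-\mu_G\|}
  && \text{classifier aligns with class means (self-duality)}
\end{align*}
```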